
    Accurate Yield Curve Scenarios Generation using Functional Gradient Descent

    We propose a multivariate nonparametric technique for generating reliable historical yield curve scenarios and confidence intervals. The approach is based on a Functional Gradient Descent (FGD) estimation of the conditional mean vector and volatility matrix of a multivariate interest rate series. It is computationally feasible in large dimensions and it can account for non-linearities in the dependence of interest rates at all available maturities. Based on FGD, we apply filtered historical simulation to compute reliable out-of-sample yield curve scenarios and confidence intervals. We back-test our methodology on daily USD bond data for forecasting horizons from 1 to 10 days. Based on several statistical performance measures, we find significant evidence of a higher predictive power of our method when compared to scenario-generating techniques based on (i) factor analysis, (ii) a multivariate CCC-GARCH model, or (iii) an exponential smoothing volatility estimator as in the RiskMetrics approach.
    Keywords: Conditional mean and volatility estimation; Filtered Historical Simulation; Functional Gradient Descent; Term structure; Multivariate CCC-GARCH models
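    As a rough illustration of the filtered historical simulation step described above, the Python sketch below resamples standardized residuals and rescales them by the latest volatility forecast; the conditional mean and volatility inputs are simple placeholders here, not FGD estimates, and all names and data are hypothetical.

        import numpy as np

        def filtered_historical_simulation(returns, cond_mean, cond_vol,
                                           horizon=10, n_scenarios=1000, seed=0):
            """Generate scenario paths by resampling standardized residuals.

            returns   : (T, d) observed daily yield changes
            cond_mean : (T, d) fitted conditional means (placeholder, not FGD)
            cond_vol  : (T, d) fitted conditional volatilities (placeholder)
            """
            rng = np.random.default_rng(seed)
            z = (returns - cond_mean) / cond_vol          # filtered residuals
            T, d = returns.shape
            scenarios = np.empty((n_scenarios, horizon, d))
            for s in range(n_scenarios):
                idx = rng.integers(0, T, size=horizon)    # bootstrap residual dates
                # rescale by the latest forecasts of mean and volatility
                scenarios[s] = cond_mean[-1] + cond_vol[-1] * z[idx]
            return scenarios

        # Toy usage with simulated data in place of USD bond yield changes.
        T, d = 500, 5
        rets = np.random.default_rng(1).normal(0.0, 0.01, size=(T, d))
        mu = np.zeros((T, d))                              # placeholder conditional mean
        vol = np.full((T, d), rets.std(axis=0))            # placeholder conditional volatility
        paths = filtered_historical_simulation(rets, mu, vol)
        ci_low, ci_high = np.percentile(paths.sum(axis=1), [2.5, 97.5], axis=0)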

    Accurate Short-Term Yield Curve Forecasting using Functional Gradient Descent

    We propose a multivariate nonparametric technique for generating reliable short-term historical yield curve scenarios and confidence intervals. The approach is based on a Functional Gradient Descent (FGD) estimation of the conditional mean vector and covariance matrix of a multivariate interest rate series. It is computationally feasible in large dimensions and it can account for non-linearities in the dependence of interest rates at all available maturities. Based on FGD, we apply filtered historical simulation to compute reliable out-of-sample yield curve scenarios and confidence intervals. We back-test our methodology on daily USD bond data for forecasting horizons from 1 to 10 days. Based on several statistical performance measures, we find significant evidence of a higher predictive power of our method when compared to scenario-generating techniques based on (i) factor analysis, (ii) a multivariate CCC-GARCH model, or (iii) an exponential smoothing covariance estimator as in the RiskMetrics™ approach.
    Keywords: Conditional mean and variance estimation; Filtered Historical Simulation; Functional Gradient Descent; Term structure; Multivariate CCC-GARCH models
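    For reference, benchmark (iii) above relies on exponentially smoothed covariances; a minimal Python sketch of such an estimator, assuming the usual RiskMetrics decay factor of 0.94 for daily data, is:

        import numpy as np

        def ewma_covariance(returns, lam=0.94):
            """Exponentially weighted moving-average covariance matrix.

            returns : (T, d) array of daily changes; lam is the decay factor.
            Returns the (d, d) covariance estimate after the last observation.
            """
            cov = np.cov(returns.T)                        # initialize with the sample covariance
            for r in returns:
                r = r[:, None]
                cov = lam * cov + (1.0 - lam) * (r @ r.T)  # recursive update
            return cov

        cov_hat = ewma_covariance(np.random.default_rng(2).normal(0.0, 0.01, size=(500, 5)))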

    A general multivariate threshold GARCH model with dynamic conditional correlations

    We propose a new multivariate GARCH model with Dynamic Conditional Correlations that extends previous models by admitting multivariate thresholds in conditional volatilities and correlations. The model estimation is feasible in large dimensions and the positive definiteness of the conditional covariance matrix is easily ensured by the structure of the model. Thresholds in conditional volatilities and correlations are estimated from the data, together with all other model parameters. We study the performance of our model in three distinct applications to US stock and bond market data. Although the conditional volatility functions of stock returns exhibit pronounced GARCH and threshold features, their conditional correlation dynamics depend on a very simple threshold structure with no local GARCH features. We obtain a similar result for the conditional correlations between government and corporate bond returns. By contrast, we find both threshold and GARCH structures in the conditional correlations between stock and government bond returns. In all applications, our model significantly improves the in-sample and out-of-sample forecasting power for future conditional correlations with respect to other relevant multivariate GARCH models.
    Keywords: Multivariate GARCH models; Dynamic conditional correlations; Tree-structured GARCH models
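    To make the threshold idea concrete, the following Python sketch runs a DCC-type correlation recursion whose update parameters switch on a single fixed threshold for a lagged variable; this is only an illustration, since the paper's tree-structured thresholds are estimated from the data and may also enter the volatility equations.

        import numpy as np

        def threshold_dcc_correlations(std_resid, threshold_var, c=0.0,
                                       ab_low=(0.02, 0.95), ab_high=(0.05, 0.90)):
            """Toy DCC correlation recursion with one threshold regime.

            std_resid     : (T, d) standardized residuals (returns over fitted volatilities)
            threshold_var : (T,) variable whose lag selects the regime
            c             : threshold level; (a, b) are the DCC parameters per regime
            """
            T, d = std_resid.shape
            Qbar = np.corrcoef(std_resid.T)                # unconditional correlation target
            Q = Qbar.copy()
            R = np.empty((T, d, d))
            for t in range(T):
                a, b = ab_low if (t > 0 and threshold_var[t - 1] <= c) else ab_high
                e = std_resid[t][:, None]
                Q = (1.0 - a - b) * Qbar + a * (e @ e.T) + b * Q
                s = np.sqrt(np.diag(Q))
                R[t] = Q / np.outer(s, s)                  # rescale to a correlation matrix
            return R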

    Ambiguity Aversion and the Term Structure of Interest Rates

    This paper studies the term structure implications of a simple structural economy in which the representative agent displays ambiguity aversion, modeled by Multiple Priors Recursive Utility. Bond excess returns reflect a premium for ambiguity, which is observationally distinct from the risk premium of affine yield curve models. The ambiguity premium can be large even in the simplest log-utility model and is non-zero also for stochastic factors that carry a zero risk premium. A calibrated low-dimensional two-factor economy with ambiguity is able to reproduce the deviations from the expectations hypothesis documented in the literature, without modifying in a substantial way the nonlinear mean-reversion dynamics of the short interest rate. In this economy, we do not find any apparent trade-off between fitting the first and second moments of the yield curve and the large equity premium.
    Keywords: General Equilibrium; Term Structure of Interest Rates; Ambiguity Aversion; Expectations Hypothesis; Campbell-Shiller Regression
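    For context, the expectations-hypothesis deviations mentioned above are usually documented with Campbell-Shiller regressions of future yield changes on the scaled yield spread, whose slope should equal one under the hypothesis. A minimal Python sketch of that regression, using the common approximation of the (n-1)-period yield at t+1 by the n-period yield, is:

        import numpy as np

        def campbell_shiller_slope(y_n, y_1, n):
            """Slope of the regression of y(n)_{t+1} - y(n)_t on (y(n)_t - y(1)_t) / (n - 1).

            y_n, y_1 : (T,) arrays of n-period and 1-period yields.
            The expectations hypothesis predicts a slope of one.
            """
            lhs = y_n[1:] - y_n[:-1]
            rhs = (y_n[:-1] - y_1[:-1]) / (n - 1)
            X = np.column_stack([np.ones_like(rhs), rhs])
            return np.linalg.lstsq(X, lhs, rcond=None)[0][1]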

    Optimal Conditionally Unbiased Bounded-Influence Inference in Dynamic Location and Scale Models

    This paper studies the local robustness of estimators and tests for the conditional location and scale parameters in a strictly stationary time series model. We first derive optimal bounded-influence estimators for such settings under a conditionally Gaussian reference model. Based on these results, optimal bounded-influence versions of the classical likelihood-based tests for parametric hypotheses are obtained. We propose a feasible and efficient algorithm for the computation of our robust estimators, which makes use of analytical Laplace approximations to estimate the auxiliary recentering vectors ensuring Fisher consistency in robust estimation. This strongly reduces the necessary computation time by avoiding the simulation of multidimensional integrals, a task that typically has to be addressed in the robust estimation of nonlinear time series models. In Monte Carlo simulations of an AR(1)-ARCH(1) process, we show that our robust procedures maintain a very high efficiency under ideal model conditions and at the same time perform very satisfactorily under several forms of departure from conditional normality. By contrast, classical Pseudo Maximum Likelihood inference procedures are found to be highly inefficient under such local model misspecifications. These patterns are confirmed by an application to robust testing for ARCH.
    Keywords: Time series models; M-estimators; influence function; robust estimation and testing
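    The bounded-influence idea amounts to capping the impact of any single observation on the estimating equations. The Python sketch below applies Huber-type weights to the location and scale scores of a static Gaussian model; it is only a toy version, omitting both the conditional dynamics and the Fisher-consistency recentering that the paper computes with Laplace approximations.

        import numpy as np

        def huber_weight(score, bound):
            """Weight that caps the norm of the score vector at `bound`."""
            norm = np.linalg.norm(score)
            return 1.0 if norm <= bound else bound / norm

        def robust_location_scale(x, bound=3.0, n_iter=50):
            """Toy bounded-influence estimate of an unconditional location and scale."""
            mu = np.median(x)
            sigma = 1.4826 * np.median(np.abs(x - mu))      # MAD starting value
            for _ in range(n_iter):
                z = (x - mu) / sigma
                scores = np.column_stack([z, z**2 - 1.0])   # location and scale scores
                w = np.array([huber_weight(s, bound) for s in scores])
                mu = np.sum(w * x) / np.sum(w)
                sigma = np.sqrt(np.sum(w * (x - mu) ** 2) / np.sum(w))
            return mu, sigma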

    Essays in asset pricing

    My dissertation consists of three chapters, each of which focuses on a different area of research in asset pricing. The first chapter's focal point is the measurement of the premium for jump risks in index option markets. The second chapter is devoted to nonparametric measurement of pricing kernel dispersion. The third chapter contributes to the literature on latent state variable recovery in option pricing models. In the first chapter, "Big risk", I show how to replicate a large family of high-frequency measures of realised return variation using dynamically rebalanced option portfolios. With this technology, investors can generate optimal hedging payoffs for realised variance and several measures of realised jump variation in incomplete option markets. These trading strategies induce excess payoffs that are direct compensation for second- and higher-order risk exposure in the market for (index) options. Sample averages of these excess payoffs are natural estimates of the risk premia associated with second- and higher-order risk exposures. In an application to the market for short-maturity European options on the S&P500 index, I obtain important new evidence about the pricing of variance and jump risk. I find that the variance risk premium is positive during daytime, when the hedging frequency is high enough, and negative during night-time. Similarly, for an investor taking long variance positions, daytime profits are greater in absolute value than night-time losses. Compensation for big risk is mostly available overnight. The premium for jump skewness risk is positive, while the premium for jump quarticity is negative (unlike for variance, also during the trading day). The risk premium for big risk is concentrated in states with large recent big risk realisations. In the second chapter, "Arbitrage free dispersion", co-authored with Andras Sali and Fabio Trojani, we develop a theory of arbitrage-free dispersion (AFD) which allows for direct insights into the dependence structure of the pricing kernel and stock returns, and which characterizes the testable restrictions of asset pricing models. Arbitrage-free dispersion arises as a consequence of Jensen's inequality and the convexity of the cumulant generating function of the pricing kernel and returns. It implies a wide family of model-free dispersion constraints, which extend the existing literature on dispersion and co-dispersion bounds. The new techniques are applicable within a unifying approach in multivariate and multiperiod settings. In an empirical application, we find that the dispersion of stationary and martingale pricing kernel components in a benchmark long-run risk model yields a counterfactual dependence of short- vs. long-maturity bond returns and is insufficient for pricing optimal portfolios of market equity and short-term bonds. In the third chapter, "State recovery from option data through variation swap rates in the presence of unspanned skewness", I show that a certain class of variance and skew swaps can be thought of as sufficient statistics of the implied volatility surface in the context of uncovering the conditional dynamics of second and third moments of index returns. I interpret the slope of the Cumulant Generating Function of index returns in the context of tradable swap contracts, which nest the standard variance swap and share its fundamental linear pricing property in the class of Affine Jump Diffusion models.
    Equipped with variance- and skew-pricing contracts, I investigate the performance of a range of state variable filtering setups in the context of the stylized facts uncovered by the recent empirical option pricing literature, which underlines the importance of decoupling the drivers of stochastic volatility from those of stochastic (jump) skewness. The linear pricing structure of the contracts allows for an exact evaluation of the impact of state variables on the observed prices. This simple pricing structure allows me to design improved low-dimensional state-space filtering setups for estimating AJD models. In a simulated setting, I show that in the presence of unspanned skewness, a simple filtering setup which includes only prices of skew and variance swaps offers significant improvements over a high-dimensional filter which treats all observed option prices as observable inputs.
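    As one concrete point of reference for the option-portfolio constructions in the first and third chapters, the standard static replication of a variance swap rate weights out-of-the-money option prices by the inverse squared strike. The Python sketch below discretizes that textbook formula; it is not the dissertation's own estimator, and all inputs are hypothetical.

        import numpy as np

        def variance_swap_rate(strikes, otm_prices, r, T):
            """Discretized log-contract replication of an annualized variance swap rate.

            strikes    : increasing array of option strikes
            otm_prices : out-of-the-money option prices (puts below the forward, calls above)
            r, T       : risk-free rate and maturity in years
            """
            dK = np.gradient(strikes)                       # strike spacings
            integral = np.sum(otm_prices / strikes**2 * dK)
            return 2.0 * np.exp(r * T) / T * integral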

    Limits of Learning about a Categorical Latent Variable under Prior Near-Ignorance

    In this paper, we consider Walley's coherent theory of (epistemic) uncertainty, in which beliefs are represented through sets of probability distributions, and we focus on the problem of modeling prior ignorance about a categorical random variable. In this setting, it is a known result that a state of prior ignorance is not compatible with learning. To overcome this problem, another state of beliefs, called near-ignorance, has been proposed. Near-ignorance resembles ignorance very closely, by satisfying some principles that can arguably be regarded as necessary in a state of ignorance, while still allowing learning to take place. What this paper does is provide new and substantial evidence that near-ignorance, too, cannot really be regarded as a way out of the problem of starting statistical inference in conditions of very weak beliefs. The key to this result is focusing on a setting characterized by a variable of interest that is latent. We argue that such a setting is by far the most common case in practice, and we provide, for the case of categorical latent variables (and general manifest variables), a condition that, if satisfied, prevents learning from taking place under prior near-ignorance. This condition is shown to be easily satisfied even in the most common statistical problems. We regard these results as a strong form of evidence against the possibility of adopting a condition of prior near-ignorance in real statistical problems.
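    For context, the standard near-ignorance prior for a categorical variable in Walley's framework is the imprecise Dirichlet model. When the variable is observed directly, its posterior probability bounds start vacuous and narrow as data accumulate, which is exactly the learning that the paper shows can break down once the variable of interest is latent. A minimal Python sketch of the directly observed case:

        def idm_bounds(counts, category, s=2.0):
            """Posterior lower and upper probabilities of `category` under the
            imprecise Dirichlet model, with prior strength s."""
            n_j = counts.get(category, 0)
            N = sum(counts.values())
            return n_j / (N + s), (n_j + s) / (N + s)

        print(idm_bounds({}, "a"))                  # vacuous bounds (0.0, 1.0) before any data
        print(idm_bounds({"a": 40, "b": 60}, "a"))  # roughly (0.39, 0.41) after 100 observations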

    Learning about a Categorical Latent Variable under Prior Near-Ignorance

    It is well known that complete prior ignorance is not compatible with learning, at least in a coherent theory of (epistemic) uncertainty. What is less widely known is that there is a state similar to full ignorance, which Walley calls near-ignorance, that does permit learning to take place. In this paper we provide new and substantial evidence that near-ignorance, too, cannot really be regarded as a way out of the problem of starting statistical inference in conditions of very weak beliefs. The key to this result is focusing on a setting characterized by a variable of interest that is latent. We argue that such a setting is by far the most common case in practice, and we show, for the case of categorical latent variables (and general manifest variables), that there is a sufficient condition which, if satisfied, prevents learning from taking place under prior near-ignorance. This condition is shown to be easily satisfied in the most common statistical problems.

    On the Informational Content of Changing Risk for Dynamic Asset Allocation

    The informational content of changing risk for dynamic asset allocation is analyzed in order to investigate its importance in determining expected index returns. We consider a class of optimal dynamic strategies that take into account both changing risk and expected returns that vary according to changing risk. We compare their risk-adjusted performance to that of a buy-and-hold strategy under different hypotheses on the form of conditionally expected returns. The statistical evidence in favour of expected returns varying according to changing risk is elusive. On the other hand, we find some evidence of a superior unconditional risk-adjusted performance of volatility-based trading rules compared to buy-and-hold strategies. This suggests that changing risk conveys information that is useful for improving performance.
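    As a rough illustration of the volatility-based trading rules compared with buy-and-hold above, the Python sketch below scales daily index exposure inversely to an EWMA volatility estimate and compares Sharpe ratios; the rule, its parameters, and the simulated returns are hypothetical and not those studied in the paper.

        import numpy as np

        def volatility_timed_returns(index_returns, target_vol=0.01, lam=0.94):
            """Scale daily exposure inversely to an EWMA volatility forecast."""
            var = np.var(index_returns)                       # full-sample start value, toy only
            strategy = np.empty_like(index_returns)
            for t, r in enumerate(index_returns):
                weight = min(target_vol / np.sqrt(var), 2.0)  # cap leverage at 2
                strategy[t] = weight * r
                var = lam * var + (1.0 - lam) * r**2          # update after using the forecast
            return strategy

        def annualized_sharpe(returns):
            return returns.mean() / returns.std() * np.sqrt(252)

        rets = np.random.default_rng(3).standard_t(df=5, size=2500) * 0.01
        print(annualized_sharpe(rets), annualized_sharpe(volatility_timed_returns(rets)))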

    Aligning capital with risk

    The interaction of capital and risk is of primary interest in the corporate governance of banks, as it links operational profitability and strategic risk management. Senior executives understand that their organization's monitoring system strongly affects the behaviour of managers and employees. Typical instruments used by senior executives to focus on strategy are balanced scorecards with objectives for performance and risk management, together with a corresponding payroll process. A top-down capital-at-risk concept gives the executive board the desired control over the operational behaviour of all risk takers. It guarantees uniform compensation for business risks taken in any division or business area. The standard theory of cost of capital assumes standardized assets, with return distributions normalized to a common one-year risk horizon. Risk measurement and management for individual risk factors, by contrast, have a bottom-up design: the typical risk horizon is 10 days for trading positions, 1 month for treasury positions, 1 year for operational risks, and even longer for credit risks. My contribution to the discussion is as follows: in the classical theory, capital requirements and risk measures are determined using a top-down approach, without specifying market and regulatory standards. In my thesis I show how to close the gap between bottom-up risk modelling and top-down capital alignment, dedicating a separate paper to each risk factor and its application in risk capital management.
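    One common simplification for putting bottom-up risk figures measured on different horizons onto the one-year horizon used for top-down capital is square-root-of-time scaling; the Python sketch below only illustrates the horizon gap discussed above and is not the alignment method developed in the thesis.

        import math

        def scale_var_to_one_year(var_h, horizon_days, trading_days=250):
            """Scale a value-at-risk figure from its native horizon to one year,
            assuming i.i.d. returns (square-root-of-time rule)."""
            return var_h * math.sqrt(trading_days / horizon_days)

        print(scale_var_to_one_year(5.0, 10))   # 10-day figure scaled to one year: 25.0
        print(scale_var_to_one_year(3.0, 21))   # roughly 10.35 for a one-month figure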